Humans constantly interact with objects in daily life. Capturing such processes and subsequently conducting visual inferences from a fixed viewpoint suffers from occlusions, shape and texture ambiguities, motion, etc. To mitigate the problem, it is essential to build a training dataset that captures free-viewpoint interactions. We construct a dense multi-view dome to acquire a complex human-object interaction dataset, named HODome, which consists of $\sim$75M frames of 10 subjects interacting with 23 objects. To process the HODome dataset, we develop NeuralDome, a layer-wise neural processing pipeline tailored for multi-view video inputs that conducts accurate tracking, geometry reconstruction, and free-view rendering for both human subjects and objects. Extensive experiments on the HODome dataset demonstrate the effectiveness of NeuralDome on a variety of inference, modeling, and rendering tasks. Both the dataset and the NeuralDome tools will be disseminated to the community for further development.
Modern neural networks are able to perform at least as well as humans in many tasks involving object classification and image generation. However, small perturbations that are imperceptible to humans can significantly degrade the performance of well-trained deep neural networks. We provide a Distributionally Robust Optimization (DRO) framework that integrates human-based image quality assessment methods to design optimal attacks that are imperceptible to humans yet cause significant damage to deep neural networks. Through extensive experiments, we show that our attack algorithm generates attacks of better quality (less perceptible to humans) than other state-of-the-art human-imperceptible attack methods. Moreover, we demonstrate that DRO training with our optimally designed human-imperceptible attacks can improve group fairness in image classification. Finally, we provide an algorithmic implementation that significantly speeds up DRO training, which may be of independent interest.
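As a concrete illustration of the idea in this abstract, here is a minimal sketch of a perceptually constrained attack, where a plain MSE term stands in for the human-based image quality assessment; the function name, loss weighting, and optimizer choice are illustrative assumptions, not the paper's actual formulation:

```python
import torch
import torch.nn.functional as F

def perceptual_attack(model, x, y, steps=40, lr=0.01, lam=10.0):
    """Hypothetical sketch: craft an adversarial example that trades off
    attack strength against a perceptual-distance penalty (plain MSE here
    is only a toy proxy for a human-based IQA score)."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        # Maximize the classifier's loss (hence the negative sign) ...
        attack_loss = -F.cross_entropy(model(x_adv), y)
        # ... while keeping the image perceptually close to the original.
        quality_penalty = lam * F.mse_loss(x_adv, x)
        opt.zero_grad()
        (attack_loss + quality_penalty).backward()
        opt.step()
    return (x + delta).detach().clamp(0, 1)
```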
We summarize our TRECVID 2022 Ad-hoc Video Search (AVS) experiments. Our solution is built with two new techniques, namely Lightweight Attentional Feature Fusion (LAFF) for combining diverse visual/textual features and Bidirectional Negation Learning (BNL) for addressing queries that contain negation cues. In particular, LAFF performs feature fusion at both early and late stages and at both the text and video ends to exploit diverse (off-the-shelf) features. Compared to multi-head self-attention, LAFF is much more compact yet more effective. Its attentional weights can also be used for selecting fewer features, with the retrieval performance largely preserved. BNL trains a negation-aware video retrieval model by minimizing a bidirectionally constrained loss per triplet, where a triplet consists of a given training video, its original description, and a partially negated description. For video feature extraction, we use pre-trained CLIP, BLIP, BEiT, ResNeXt-101 and irCSN. As for text features, we adopt bag-of-words, word2vec, CLIP and BLIP. Our training data consists of MSR-VTT, TGIF and VATEX, which were used in our previous participation. In addition, we automatically caption the V3C1 collection for pre-training. The 2022 edition of the TRECVID benchmark has again been a fruitful participation for the RUCMM team. Our best run, with an infAP of 0.262, ranks second among all teams.
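To illustrate the fusion mechanism, here is a compact sketch of attentional feature fusion in the spirit of LAFF: each projected feature receives a scalar attention weight and the fused representation is their weighted sum. The module names, projection sizes, and scoring layer are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class AttentionalFusion(nn.Module):
    def __init__(self, in_dims, common_dim=512):
        super().__init__()
        # Project heterogeneous features (CLIP, BLIP, ...) to a common space.
        self.proj = nn.ModuleList([nn.Linear(d, common_dim) for d in in_dims])
        self.score = nn.Linear(common_dim, 1)  # one scalar weight per feature

    def forward(self, feats):  # feats: list of (batch, d_i) tensors
        h = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)
        w = torch.softmax(self.score(torch.tanh(h)).squeeze(-1), dim=1)
        # Fused vector plus the weights; inspecting the weights allows
        # dropping low-contribution features, mirroring the feature
        # selection use of LAFF's attention described above.
        return (w.unsqueeze(-1) * h).sum(dim=1), w
```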
Reinforcement learning methods, as a promising technique, have achieved remarkable results in the motion planning of free-floating space robots. However, due to the increase in planning dimensions and the intensification of system dynamic coupling, the motion planning of dual-arm free-floating space robots remains an open challenge. In particular, due to the lack of end-effector pose constraints, current studies cannot handle the task of capturing a non-cooperative object. To address this problem, we propose a novel and efficient algorithm that helps RL-based methods improve planning accuracy effectively. Our core contributions are constructing a hybrid policy guided by prior knowledge and introducing the infinity norm to build a more reasonable reward function. Furthermore, our method successfully captures spinning objects with different rotation speeds.
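As a rough illustration of the infinity-norm reward mentioned above, the sketch below penalizes the worst component of the end-effector pose error, so the policy cannot trade a large error in one dimension for small errors elsewhere; the function name and scaling constant are hypothetical, as the abstract does not give the exact reward:

```python
import numpy as np

def pose_reward(ee_pose, target_pose, alpha=1.0):
    """Hypothetical reward shaped by the L-infinity norm of the
    end-effector pose error: the worst-axis error dominates."""
    err = np.abs(np.asarray(ee_pose) - np.asarray(target_pose))
    return -alpha * np.max(err)  # -alpha * ||err||_inf
```

Compared with an L2-based reward, this form forces every pose dimension to converge before the reward saturates, which is one plausible reading of "more reasonable reward function".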
A key research question in image manipulation detection is how to learn generalizable features that are sensitive to manipulations in novel data, yet specific enough to prevent false alarms on authentic images. Current research emphasizes the sensitivity, while the specificity is mostly ignored. In this paper, we address both aspects through multi-view feature learning and multi-scale supervision. By exploiting the noise distribution and boundary artifacts surrounding tampered regions, the former aims to learn semantic-agnostic and thus more generalizable features. The latter allows us to learn from authentic images, which is nontrivial for prior arts that rely on a semantic segmentation loss. Our ideas are realized by a new network, which we term MVSS-Net, and its enhanced version MVSS-Net++. Comprehensive experiments on six public benchmark datasets demonstrate the viability of the MVSS-Net series for both pixel-level and image-level manipulation detection.
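The interplay of pixel-level and image-level supervision can be sketched as a combined loss, shown below; the global-max-pooled image score and the loss weighting are illustrative assumptions rather than MVSS-Net's exact design:

```python
import torch
import torch.nn.functional as F

def multiscale_loss(pred_mask, gt_mask, w_img=0.5):
    """pred_mask: sigmoid outputs in [0, 1], shape (B, H, W);
    gt_mask: binary ground truth of the same shape (all-zero for
    authentic images)."""
    # Pixel-level supervision over the predicted manipulation map.
    pix = F.binary_cross_entropy(pred_mask, gt_mask)
    # Image-level supervision: global max pooling turns the map into a
    # per-image forgery score, so authentic images (all-zero masks)
    # penalize any spurious activation.
    img_score = pred_mask.flatten(1).max(dim=1).values
    img_label = (gt_mask.flatten(1).max(dim=1).values > 0).float()
    img = F.binary_cross_entropy(img_score, img_label)
    return pix + w_img * img
```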
Visual Place Recognition (VPR) is critical not only for the localization and mapping of autonomous vehicles, but also for assistive navigation for the visually impaired. To enable long-term VPR systems at a large scale, several challenges need to be addressed. First, different applications may require different image view directions, such as front views for autonomous vehicles and side views for people with low vision. Second, VPR in metropolitan scenes often raises privacy concerns because pedestrian and vehicle identity information is imaged, calling for data anonymization before VPR queries and database construction. Both factors could lead to VPR performance variations that are not yet well understood. To study their influences, we present the NYU-VPR dataset that contains more than 200,000 images over a 2km by 2km area near the New York University campus, taken within the whole year of 2016. We present benchmark results on several popular VPR algorithms, showing that side views are significantly more challenging for current VPR methods while the influence of data anonymization is almost negligible, together with our hypothetical explanations and in-depth analysis.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
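A rough sketch of the first "reference" step follows, assuming masked average pooling for the dynamic class centers and a simple sigmoid gate for the re-weighting; both choices are illustrative, not the exact RefT design:

```python
import torch

def dynamic_class_center(support_feat, support_mask):
    """support_feat: (C, H, W); support_mask: (H, W) binary mask.
    Pool support features inside the mask into a per-class center."""
    m = support_mask.unsqueeze(0)  # (1, H, W)
    return (support_feat * m).sum(dim=(1, 2)) / m.sum().clamp(min=1.0)  # (C,)

def reweight_query(query_feat, center):
    """query_feat: (C, H, W); gate each channel by the class center,
    emphasizing channels the support object activates."""
    gate = torch.sigmoid(center).view(-1, 1, 1)  # (C, 1, 1)
    return query_feat * gate
```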
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change the way we do this work. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs automatically (MER) is becoming increasingly crucial in the field of affective computing, and provides essential technical support for lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite recent efforts to alleviate this problem with several spontaneous ME datasets, the available data are still scarce. To address this ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class-imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-Spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are the new challenges faced by surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate the quality variance distribution, helping the contrastive learning strategy obtain robust feature representations under quality variation; (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
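A minimal sketch of the quality-invariant contrastive idea, under stated assumptions: a clean image and a synthetically degraded copy form a positive pair and are pulled together with an NT-Xent-style loss. The down/up-sampling degradation and the temperature are illustrative choices, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def degrade(x, scale=4):
    """Simulate low surveillance quality via down/up-sampling.
    x: (B, C, H, W) image batch."""
    h, w = x.shape[-2:]
    low = F.interpolate(x, scale_factor=1 / scale, mode="bilinear",
                        align_corners=False)
    return F.interpolate(low, size=(h, w), mode="bilinear",
                         align_corners=False)

def quality_contrastive_loss(encoder, x, tau=0.1):
    """encoder is assumed to map (B, C, H, W) images to (B, D) embeddings."""
    z1 = F.normalize(encoder(x), dim=1)           # clean embeddings
    z2 = F.normalize(encoder(degrade(x)), dim=1)  # degraded twins
    logits = z1 @ z2.t() / tau                    # (B, B) similarities
    labels = torch.arange(x.size(0), device=x.device)
    return F.cross_entropy(logits, labels)        # match each image to its twin
```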